library(brms)
library(ggplot2)
library(kableExtra)
library(lme4)
library(lmerTest)
library(rmarkdown)
library(performance)
library(see)
library(sjmisc)
library(tidyverse)
options(
digits = 3
)
set.seed(170819)
Analyses
Set-up
Packages
Custom functions
# function to silence brms output
hush <-
  function(
    code
  ){
    sink("/dev/null")
    tmp <- code
    sink()
    return(tmp)
  }
Data
d <- read_csv("data/data.csv")
# same as above; but original file name:
# d <- read_csv("data/DataAggregated_T1T2_costsbenefits.csv")
# load image for work in IDE
# load("data/image.RData")
d <- d |>
  rename(
    group = roles,
    op_expr = n_OE,
    gender = DE01_T1,
    age = DE02_01_T1,
    pol_stance = DE06_01_T1
  ) |>
  mutate(
    # gender is coded 1 = female, 2 = male, so 2 - gender is 1 (TRUE) for women
    female = as.logical(2 - gender),
    gender = factor(gender, labels = c("female", "male"))
  )
# recode to sum (deviation) coding
d$anonymity_dev <- factor(d$anonymity)
contrasts(d$anonymity_dev) <- contr.sum(2)
d$persistence_dev <- factor(d$persistence)
contrasts(d$persistence_dev) <- contr.sum(2)
Descriptives
Let’s inspect the distribution of opinion expressions.
ggplot(d, aes(op_expr)) +
geom_histogram(binwidth = 1)
This looks like a zero-inflated Poisson distribution, which confirms our preregistered approach of analyzing the data with a zero-inflated Poisson model.
nrow(d[d$op_expr == 0, ]) / nrow(d)
[1] 0.214
Overall, 21% of participants did not express any opinion.
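To build intuition for this distribution, here is a minimal simulation sketch of a zero-inflated Poisson: a point mass at zero mixed with ordinary Poisson counts. The parameters (zi = .21, lambda = 9) are hypothetical, chosen only to roughly match the descriptives above, not taken from any fitted model.

```r
set.seed(1)
n <- 10000
zi <- 0.21   # structural zero probability (roughly the observed share of zeros)
lambda <- 9  # Poisson mean (roughly the observed group means)
# with probability zi produce a structural zero, otherwise a Poisson count
y <- ifelse(runif(n) < zi, 0, rpois(n, lambda))
mean(y == 0)  # close to zi, since Poisson(9) itself rarely produces zeros
```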
Let’s look at the distribution of the experimental groups.
d |>
select(persistence, anonymity) |>
table()
           anonymity
persistence high low
high 240 240
low 240 240
The distribution across groups is perfectly balanced.
d |>
select(topic) |>
table()
topic
climate gender migration
320 320 320
The distribution across topics is also perfectly balanced.
Let’s check if groups are nested in topics:
is_nested(d$topic, d$group)
'f2' is nested within 'f1'
[1] TRUE
This is indeed the case.
We first look at the experimental groups’ descriptives.
d |>
group_by(persistence) |>
summarize(op_expr_m = mean(op_expr)) |>
as.data.frame() |>
kable()
| persistence | op_expr_m |
|---|---|
| high | 9.26 |
| low | 9.29 |
Looking at persistence, we see there’s virtually no difference between the groups.
d |>
group_by(anonymity) |>
summarize(op_expr_m = mean(op_expr)) |>
as.data.frame() |>
kable()
| anonymity | op_expr_m |
|---|---|
| high | 9.03 |
| low | 9.52 |
People with less anonymity communicated more, but the difference isn’t particularly large.
d |>
group_by(persistence, anonymity) |>
summarize(op_expr_m = mean(op_expr)) |>
as.data.frame() |>
kable()
| persistence | anonymity | op_expr_m |
|---|---|---|
| high | high | 9.27 |
| high | low | 9.25 |
| low | high | 8.79 |
| low | low | 9.78 |
Looking at both factors combined, we see that low anonymity together with low persistence produced the highest participation. But the differences between groups aren’t large.
d |>
group_by(group) |>
summarize(
anonymity = anonymity[1],
persistence = persistence[1],
topic = topic[1],
op_expr_m = mean(op_expr)
) |>
rmarkdown::paged_table()
Looking at the individual groups, we do see some differences. Generally, this shows that communication varied across groups.
d |>
group_by(topic) |>
summarize(op_expr_m = mean(op_expr)) |>
as.data.frame() |>
kable()
| topic | op_expr_m |
|---|---|
| climate | 9.63 |
| gender | 9.96 |
| migration | 8.22 |
Looking at topics specifically, we also see that there’s some variance.
Manipulation Check
Let’s see if respondents indeed perceived the experimental manipulations. We first look at descriptives.
d |>
group_by(persistence) |>
summarize(
"Perceived persistence" = mean(per_persistence, na.rm = TRUE)
) |>
as.data.frame() |>
kable()
| persistence | Perceived persistence |
|---|---|
| high | 1.78 |
| low | 4.19 |
The experimental manipulation worked.
model_pers <- lm(
per_persistence ~ persistence_dev,
d
)
summary(model_pers)
Call:
lm(formula = per_persistence ~ persistence_dev, data = d)
Residuals:
Min 1Q Median 3Q Max
-3.194 -0.777 -0.110 0.806 3.223
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.9854 0.0378 79.1 <2e-16 ***
persistence_dev1 -1.2085 0.0378 -32.0 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1 on 703 degrees of freedom
(255 observations deleted due to missingness)
Multiple R-squared: 0.593, Adjusted R-squared: 0.592
F-statistic: 1.02e+03 on 1 and 703 DF, p-value: <2e-16
The difference was statistically significant. Let’s now look at anonymity.
d |>
group_by(anonymity) |>
summarize(
"Perceived anonymity" = mean(per_anonymity, na.rm = TRUE)
) |>
as.data.frame() |>
kable()
| anonymity | Perceived anonymity |
|---|---|
| high | 1.22 |
| low | 4.42 |
The experimental manipulation worked.
model_anon <- lm(
per_anonymity ~ anonymity_dev,
d
)
summary(model_anon)
Call:
lm(formula = per_anonymity ~ anonymity_dev, data = d)
Residuals:
Min 1Q Median 3Q Max
-3.425 -0.216 -0.216 0.575 3.784
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.8204 0.0348 81.0 <2e-16 ***
anonymity_dev1 -1.6042 0.0348 -46.1 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.924 on 703 degrees of freedom
(255 observations deleted due to missingness)
Multiple R-squared: 0.751, Adjusted R-squared: 0.751
F-statistic: 2.12e+03 on 1 and 703 DF, p-value: <2e-16
This difference, too, was statistically significant; the manipulation worked.
Random Allocation
Let’s check whether random allocation across the experimental conditions worked. We first look at descriptives.
d |>
group_by(persistence, anonymity) |>
summarize(
female_m = mean(female)
, age_m = mean(age)
, pol_stance_m = mean(pol_stance)
) |>
as.data.frame() |>
kable()
| persistence | anonymity | female_m | age_m | pol_stance_m |
|---|---|---|---|---|
| high | high | 0.613 | 43.2 | 5.57 |
| high | low | 0.662 | 39.6 | 5.44 |
| low | high | 0.558 | 41.7 | 5.76 |
| low | low | 0.683 | 40.8 | 5.15 |
There seem to be some differences. Let’s test this formally.
model_persis <- lm(
as.integer(persistence_dev) ~ age + gender + pol_stance,
d
)
summary(model_persis)
Call:
lm(formula = as.integer(persistence_dev) ~ age + gender + pol_stance,
data = d)
Residuals:
Min 1Q Median 3Q Max
-0.5352 -0.4994 -0.0006 0.4994 0.5385
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.536577 0.072667 21.15 <2e-16 ***
age -0.000632 0.001402 -0.45 0.65
gendermale 0.022885 0.034672 0.66 0.51
pol_stance -0.003460 0.008089 -0.43 0.67
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.501 on 956 degrees of freedom
Multiple R-squared: 0.000702, Adjusted R-squared: -0.00243
F-statistic: 0.224 on 3 and 956 DF, p-value: 0.88
Random allocation of gender, age, and political stance across the two persistence groups was successful.
model_anon <- lm(
as.integer(anonymity_dev) ~ age + gender + pol_stance,
d
)
summary(model_anon)
Call:
lm(formula = as.integer(anonymity_dev) ~ age + gender + pol_stance,
data = d)
Residuals:
Min 1Q Median 3Q Max
-0.6702 -0.4902 0.0125 0.4874 0.7263
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.77558 0.07192 24.69 <2e-16 ***
age -0.00323 0.00139 -2.33 0.0201 *
gendermale -0.06686 0.03432 -1.95 0.0517 .
pol_stance -0.02142 0.00801 -2.68 0.0076 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.496 on 956 degrees of freedom
Multiple R-squared: 0.0211, Adjusted R-squared: 0.018
F-statistic: 6.87 on 3 and 956 DF, p-value: 0.00014
However, allocation across the anonymity groups wasn’t successful. In the subsequent analyses, we therefore control for these sociodemographic variables.
Bayesian mixed effects modeling
We analyze the data using Bayesian modeling.
We use deviation/sum contrast coding (1, -1), meaning the contrasts measure the main effects of the independent variables.
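As a quick base-R illustration, this is the coding matrix that `contr.sum(2)` (used above) assigns to a two-level factor: the two levels are coded 1 and -1, so each coefficient estimates a main effect averaged over the levels of the other factor.

```r
# sum (deviation) coding for a two-level factor
contr.sum(2)
#      [,1]
# [1,]    1
# [2,]   -1
```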
Fixed effects
We preregistered to analyze fixed effects.
fit_fe_1 <-
hush(
brm(
op_expr ~
1 + persistence_dev * anonymity_dev + age + female + pol_stance +
(1 | topic/group)
, data = d
, chains = 4
, cores = 4
, iter = 6000
, warmup = 2000
, family = zero_inflated_poisson()
, control = list(
adapt_delta = .95
, max_treedepth = 12
)
, save_pars = save_pars(all = TRUE)
, silent = 2
)
)
Warning: There were 330 divergent transitions after warmup. See
https://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
to find out why this is a problem and how to eliminate them.
Warning: Examine the pairs() plot to diagnose sampling problems
The model shows some convergence warnings. Let’s inspect it.
plot(fit_fe_1, ask = FALSE)
Trace plots look alright.
Let’s look at the results.
summary(fit_fe_1)
Warning: There were 330 divergent transitions after warmup. Increasing
adapt_delta above 0.95 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
Family: zero_inflated_poisson
Links: mu = log; zi = identity
Formula: op_expr ~ 1 + persistence_dev * anonymity_dev + age + female + pol_stance + (1 | topic/group)
Data: d (Number of observations: 960)
Draws: 4 chains, each with iter = 6000; warmup = 2000; thin = 1;
total post-warmup draws = 16000
Multilevel Hyperparameters:
~topic (Number of levels: 3)
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept) 0.27 0.32 0.01 1.21 1.00 1879 1967
~topic:group (Number of levels: 48)
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept) 0.40 0.05 0.32 0.50 1.00 2529 4636
Regression Coefficients:
Estimate Est.Error l-95% CI u-95% CI Rhat
Intercept 1.99 0.22 1.49 2.47 1.00
persistence_dev1 -0.01 0.06 -0.12 0.11 1.01
anonymity_dev1 -0.01 0.06 -0.13 0.11 1.00
age 0.01 0.00 0.01 0.01 1.00
femaleTRUE -0.00 0.02 -0.05 0.04 1.00
pol_stance -0.02 0.01 -0.03 -0.01 1.00
persistence_dev1:anonymity_dev1 0.02 0.06 -0.09 0.14 1.00
Bulk_ESS Tail_ESS
Intercept 2441 1383
persistence_dev1 2894 4760
anonymity_dev1 2785 4618
age 13034 8835
femaleTRUE 6554 5051
pol_stance 13079 10695
persistence_dev1:anonymity_dev1 3066 3327
Further Distributional Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
zi 0.21 0.01 0.19 0.24 1.00 11038 9513
Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
No significant effects emerged.
Let’s inspect the ICC.
var_ratio_fe <- performance::variance_decomposition(
fit_fe_1
, by_group = TRUE)
var_ratio_fe
# Random Effect Variances and ICC
Conditioned on: all random effects
## Variance Ratio (comparable to ICC)
Ratio: 0.42 CI 95%: [-0.33 0.74]
## Variances of Posterior Predicted Distribution
Conditioned on fixed effects: 31.87 CI 95%: [14.10 73.13]
Conditioned on rand. effects: 55.05 CI 95%: [49.82 60.88]
## Difference in Variances
Difference: 23.13 CI 95%: [-18.27 41.30]
Roughly 42% of the variance in opinion expressions is explained by topics and groups combined.
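For intuition, the ratio reported above corresponds (up to how the posterior draws are summarized, so it won’t match to the third decimal) to one minus the fixed-effects-only predictive variance over the variance conditioned on all random effects. Plugging in the posterior medians from the output:

```r
# variance conditioned on fixed effects only: 31.87
# variance conditioned on all random effects: 55.05
1 - 31.87 / 55.05  # roughly the reported ratio of 0.42
```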
Let’s visualize the results to see what exactly they mean.
p <- plot(
conditional_effects(
fit_fe_1
),
ask = FALSE,
plot = FALSE
)
p_anon <-
p[["anonymity_dev"]] +
xlab("Anonymity") +
ylab("Opinion expression") +
scale_x_discrete(
limits = rev
) +
scale_y_continuous(
limits = c(5, 14)
, breaks = c(6, 8, 10, 12, 14)
)
p_pers <-
p[["persistence_dev"]] +
xlab("Persistence") +
ylab("Opinion expression") +
scale_x_discrete(
limits = rev
) +
scale_y_continuous(
limits = c(5, 14)
, breaks = c(6, 8, 10, 12, 14)
) +
theme(
axis.title.y = element_blank()
)
p_int <-
p[["persistence_dev:anonymity_dev"]] +
xlab("Persistence") +
scale_x_discrete(
limits = rev
) +
scale_color_discrete(
labels = c("low", "high")
) +
guides(
fill = "none",
color = guide_legend(
title = "Anonymity"
)
) +
theme(
axis.title.y = element_blank()
) +
scale_y_continuous(
limits = c(5, 14)
, breaks = c(6, 8, 10, 12, 14)
)
plot <- cowplot::plot_grid(
p_anon, p_pers, p_int,
labels = c('A', 'B', "C"),
nrow = 1,
rel_widths = c(2, 2, 3)
)
plot
ggsave("figures/results.png", plot, width = 8, height = 4)
The figure shows no main effects. There seems to be a (nonsignificant) interaction effect: in the low-persistence environment, anonymity is conducive to communication; in the high-persistence environment, it is the opposite.
Let’s look at the posterior predictive checks.
p_1 <-
pp_check(fit_fe_1) +
labs(title = "Zero-inflated Poisson")
Using 10 posterior draws for ppc type 'dens_overlay' by default.
p_1
The actual distribution cannot be reproduced precisely, but it’s also not too far off.
Random effects
We preregistered to explore and compare models with random effects. So let’s model how the experimental conditions affect the outcomes differently depending on topic.
fit_re_1 <-
hush(
brm(
op_expr ~
1 + persistence_dev * anonymity_dev + age + female + pol_stance +
(1 + persistence_dev * anonymity_dev | topic) +
(1 | topic:group)
, data = d
, chains = 4
, cores = 4
, iter = 6000
, warmup = 2000
, family = zero_inflated_poisson()
, control = list(
adapt_delta = .95
, max_treedepth = 15
)
, save_pars = save_pars(all = TRUE)
)
)
Warning: There were 510 divergent transitions after warmup. See
https://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
to find out why this is a problem and how to eliminate them.
Warning: Examine the pairs() plot to diagnose sampling problems
The model shows some convergence warnings.
Let’s inspect it.
plot(fit_re_1, ask = FALSE)
Trace plots look alright.
Let’s look at the results.
summary(fit_re_1)
Warning: There were 1281 divergent transitions after warmup. Increasing
adapt_delta above 0.95 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
Family: zero_inflated_poisson
Links: mu = log; zi = identity
Formula: op_expr ~ 1 + persistence_dev * anonymity_dev + (1 + persistence_dev * anonymity_dev | topic) + (1 | topic:group)
Data: d (Number of observations: 960)
Draws: 4 chains, each with iter = 8000; warmup = 2000; thin = 1;
total post-warmup draws = 24000
Multilevel Hyperparameters:
~topic (Number of levels: 3)
Estimate Est.Error
sd(Intercept) 0.36 0.46
sd(persistence_dev1) 0.25 0.35
sd(anonymity_dev1) 0.24 0.34
sd(persistence_dev1:anonymity_dev1) 0.41 0.46
cor(Intercept,persistence_dev1) -0.00 0.46
cor(Intercept,anonymity_dev1) 0.01 0.46
cor(persistence_dev1,anonymity_dev1) -0.01 0.47
cor(Intercept,persistence_dev1:anonymity_dev1) 0.08 0.46
cor(persistence_dev1,persistence_dev1:anonymity_dev1) 0.02 0.46
cor(anonymity_dev1,persistence_dev1:anonymity_dev1) -0.01 0.46
l-95% CI u-95% CI Rhat
sd(Intercept) 0.01 1.60 1.00
sd(persistence_dev1) 0.01 1.21 1.00
sd(anonymity_dev1) 0.00 1.20 1.00
sd(persistence_dev1:anonymity_dev1) 0.02 1.69 1.00
cor(Intercept,persistence_dev1) -0.83 0.83 1.00
cor(Intercept,anonymity_dev1) -0.83 0.83 1.00
cor(persistence_dev1,anonymity_dev1) -0.83 0.83 1.00
cor(Intercept,persistence_dev1:anonymity_dev1) -0.78 0.86 1.00
cor(persistence_dev1,persistence_dev1:anonymity_dev1) -0.83 0.84 1.00
cor(anonymity_dev1,persistence_dev1:anonymity_dev1) -0.83 0.83 1.00
Bulk_ESS Tail_ESS
sd(Intercept) 6620 11752
sd(persistence_dev1) 7764 10211
sd(anonymity_dev1) 7280 9370
sd(persistence_dev1:anonymity_dev1) 6024 5585
cor(Intercept,persistence_dev1) 21881 16269
cor(Intercept,anonymity_dev1) 22853 15065
cor(persistence_dev1,anonymity_dev1) 19146 13364
cor(Intercept,persistence_dev1:anonymity_dev1) 22045 15452
cor(persistence_dev1,persistence_dev1:anonymity_dev1) 19117 15472
cor(anonymity_dev1,persistence_dev1:anonymity_dev1) 15719 17072
~topic:group (Number of levels: 48)
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept) 0.40 0.05 0.32 0.51 1.00 7776 12613
Regression Coefficients:
Estimate Est.Error l-95% CI u-95% CI Rhat
Intercept 2.38 0.27 1.79 2.95 1.00
persistence_dev1 -0.00 0.21 -0.41 0.44 1.00
anonymity_dev1 0.00 0.22 -0.44 0.44 1.00
persistence_dev1:anonymity_dev1 0.02 0.33 -0.67 0.71 1.00
Bulk_ESS Tail_ESS
Intercept 10960 9291
persistence_dev1 10694 8366
anonymity_dev1 9075 6054
persistence_dev1:anonymity_dev1 8583 5583
Further Distributional Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
zi 0.21 0.01 0.19 0.24 1.00 32167 15762
Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
Again, no main or interaction effects.
Let’s see if the random-effects model fits better, using 10-fold cross-validation.
fit_fe_1 <- add_criterion(
fit_fe_1
, "kfold"
, K = 10
, cores = 4
)
fit_re_1 <- add_criterion(
fit_re_1
, "kfold"
, K = 10
, cores = 4
)
comp_1 <- loo_compare(fit_fe_1, fit_re_1, criterion = "kfold")
comp_1
         elpd_diff se_diff
fit_re_1 0.0 0.0
fit_fe_1 -16.5 29.9
Although the model comparison showed that the model with random effects fit better, the difference was not significant (Δ ELPD = -16.46, 95% CI [-75.05, 42.14]). Hence, for reasons of parsimony, the model with fixed effects is preferred.
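The 95% CI for the ELPD difference can be reproduced from the elpd_diff and se_diff values in the loo_compare() output above, via a normal approximation around the estimate:

```r
# normal-approximation 95% CI for the ELPD difference,
# using the (rounded) estimates from loo_compare() above
elpd_diff <- -16.46
se_diff <- 29.9
elpd_diff + c(-1, 1) * qnorm(.975) * se_diff
```

Since the difference is well within roughly two standard errors of zero, the models are not reliably distinguishable.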
Hurdle
Let’s now estimate a fixed effects model with hurdles.
fit_hrdl_1 <-
hush(
brm(
bf(
op_expr ~
1 + persistence_dev * anonymity_dev + age + female + pol_stance +
(1 | topic) +
(1 | topic:group),
zi ~
1 + persistence_dev * anonymity_dev + age + female + pol_stance +
(1 | topic) +
(1 | topic:group)
)
, data = d
, chains = 4
, cores = 4
, iter = 6000
, warmup = 2000
, family = zero_inflated_poisson()
, control = list(
adapt_delta = .95
, max_treedepth = 15
)
)
)
Warning: There were 4188 divergent transitions after warmup. See
https://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
to find out why this is a problem and how to eliminate them.
Warning: Examine the pairs() plot to diagnose sampling problems
Warning: The largest R-hat is 1.57, indicating chains have not mixed.
Running the chains for more iterations may help. See
https://mc-stan.org/misc/warnings.html#r-hat
Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
Running the chains for more iterations may help. See
https://mc-stan.org/misc/warnings.html#bulk-ess
Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
Running the chains for more iterations may help. See
https://mc-stan.org/misc/warnings.html#tail-ess
Again, some warnings.
Let’s inspect the model.
plot(fit_hrdl_1, ask = FALSE)
Trace plots look alright.
summary(fit_hrdl_1)
Warning: There were 601 divergent transitions after warmup. Increasing
adapt_delta above 0.95 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
Family: zero_inflated_poisson
Links: mu = log; zi = logit
Formula: op_expr ~ 1 + persistence_dev * anonymity_dev + (1 | topic) + (1 | topic:group)
zi ~ 1 + persistence_dev * anonymity_dev + (1 | topic) + (1 | topic:group)
Data: d (Number of observations: 960)
Draws: 4 chains, each with iter = 8000; warmup = 2000; thin = 1;
total post-warmup draws = 24000
Multilevel Hyperparameters:
~topic (Number of levels: 3)
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept) 0.27 0.32 0.01 1.17 1.00 2245 1183
sd(zi_Intercept) 0.28 0.44 0.01 1.50 1.00 6240 5389
~topic:group (Number of levels: 48)
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept) 0.40 0.05 0.32 0.50 1.00 5272 10476
sd(zi_Intercept) 0.24 0.14 0.01 0.53 1.00 5361 8349
Regression Coefficients:
Estimate Est.Error l-95% CI u-95% CI Rhat
Intercept 2.39 0.20 1.96 2.83 1.00
zi_Intercept -1.30 0.28 -1.74 -0.67 1.00
persistence_dev1 -0.01 0.06 -0.12 0.11 1.00
anonymity_dev1 0.00 0.06 -0.12 0.12 1.00
persistence_dev1:anonymity_dev1 0.03 0.06 -0.09 0.15 1.00
zi_persistence_dev1 0.03 0.09 -0.15 0.20 1.00
zi_anonymity_dev1 0.01 0.09 -0.17 0.18 1.00
zi_persistence_dev1:anonymity_dev1 0.01 0.09 -0.16 0.18 1.00
Bulk_ESS Tail_ESS
Intercept 2994 1307
zi_Intercept 7245 4630
persistence_dev1 4984 9340
anonymity_dev1 5664 9023
persistence_dev1:anonymity_dev1 5982 9817
zi_persistence_dev1 19443 9630
zi_anonymity_dev1 22174 16988
zi_persistence_dev1:anonymity_dev1 10375 7268
Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
Same results: no main effects, and a slightly larger but still nonsignificant interaction effect.
Exploratory Analyses
Frequentist
Let’s look at the results from a frequentist perspective.
Fixed effects
Estimate the nested model.
fit_fe_1_frq <-
lmer(
op_expr ~
1 +
(1 | topic/group) +
persistence_dev * anonymity_dev + age + female + pol_stance
, data = d
)
summary(fit_fe_1_frq)
Linear mixed model fit by REML ['lmerMod']
Formula: op_expr ~ 1 + (1 | topic/group) + persistence_dev * anonymity_dev +
age + female + pol_stance
Data: d
REML criterion at convergence: 7604
Scaled residuals:
Min 1Q Median 3Q Max
-1.484 -0.555 -0.196 0.264 13.549
Random effects:
Groups Name Variance Std.Dev.
group:topic (Intercept) 1.08e+01 3.28e+00
topic (Intercept) 8.92e-08 2.99e-04
Residual 1.55e+02 1.25e+01
Number of obs: 960, groups: group:topic, 48; topic, 3
Fixed effects:
Estimate Std. Error t value
(Intercept) 6.0450 2.1478 2.81
persistence_dev1 -0.0197 0.6214 -0.03
anonymity_dev1 -0.3437 0.6242 -0.55
age 0.0860 0.0357 2.41
femaleTRUE -0.2434 0.8783 -0.28
pol_stance -0.0319 0.2058 -0.16
persistence_dev1:anonymity_dev1 0.1953 0.6226 0.31
Correlation of Fixed Effects:
(Intr) prss_1 anny_1 age fmTRUE pl_stn
prsstnc_dv1 0.015
annymty_dv1 0.053 0.000
age -0.750 -0.010 -0.049
femaleTRUE -0.463 -0.014 0.041 0.252
pol_stance -0.538 -0.009 -0.057 -0.003 0.062
prsstn_1:_1 0.019 0.000 -0.002 -0.045 -0.034 0.038
optimizer (nloptwrap) convergence code: 0 (OK)
boundary (singular) fit: see help('isSingular')
It’s odd that topic doesn’t receive any variance at all; perhaps this is due to the small number of clusters (only three topics)? With Bayesian estimation, it worked alright. Again, there are no significant effects.
Estimate without nesting.
fit_fe_2_frq <-
lmer(
op_expr ~
1 +
(1 | group) +
persistence_dev * anonymity_dev + age + female + pol_stance + topic
, data = d
)
summary(fit_fe_2_frq)
Linear mixed model fit by REML. t-tests use Satterthwaite's method [
lmerModLmerTest]
Formula: op_expr ~ 1 + (1 | group) + persistence_dev * anonymity_dev +
age + female + pol_stance + topic
Data: d
REML criterion at convergence: 7598
Scaled residuals:
Min 1Q Median 3Q Max
-1.513 -0.548 -0.188 0.265 13.577
Random effects:
Groups Name Variance Std.Dev.
group (Intercept) 11 3.32
Residual 155 12.46
Number of obs: 960, groups: group, 48
Fixed effects:
Estimate Std. Error df t value Pr(>|t|)
(Intercept) 6.3760 2.3261 511.8117 2.74 0.0063
persistence_dev1 -0.0195 0.6257 41.8742 -0.03 0.9753
anonymity_dev1 -0.3439 0.6284 42.6053 -0.55 0.5871
age 0.0854 0.0357 942.0312 2.39 0.0169
femaleTRUE -0.2641 0.8788 938.1103 -0.30 0.7638
pol_stance -0.0324 0.2058 941.3972 -0.16 0.8751
topicgender 0.4373 1.5334 41.9702 0.29 0.7769
topicmigration -1.3078 1.5329 41.9180 -0.85 0.3984
persistence_dev1:anonymity_dev1 0.1960 0.6268 42.1850 0.31 0.7561
(Intercept) **
persistence_dev1
anonymity_dev1
age *
femaleTRUE
pol_stance
topicgender
topicmigration
persistence_dev1:anonymity_dev1
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Correlation of Fixed Effects:
(Intr) prss_1 anny_1 age fmTRUE pl_stn tpcgnd tpcmgr
prsstnc_dv1 0.014
annymty_dv1 0.049 0.000
age -0.701 -0.010 -0.049
femaleTRUE -0.423 -0.014 0.041 0.252
pol_stance -0.489 -0.009 -0.057 -0.004 0.063
topicgender -0.325 0.000 -0.001 0.017 -0.024 -0.020
topicmigrtn -0.337 0.000 -0.001 0.024 0.000 -0.016 0.500
prsstn_1:_1 0.018 0.000 -0.001 -0.045 -0.033 0.038 -0.001 -0.002
This also shows no significant effects.
Out of curiosity, let’s also estimate the model without any hierarchical structure.
fit_fe_3_frq <-
lm(
op_expr ~
1 +
persistence_dev * anonymity_dev + topic + age + female + pol_stance
, data = d
)
summary(fit_fe_3_frq)
Call:
lm(formula = op_expr ~ 1 + persistence_dev * anonymity_dev +
topic + age + female + pol_stance, data = d)
Residuals:
Min 1Q Median 3Q Max
-12.95 -7.46 -2.92 3.00 177.59
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.7503 2.2095 2.60 0.0094 **
persistence_dev1 -0.0223 0.4148 -0.05 0.9571
anonymity_dev1 -0.3598 0.4191 -0.86 0.3908
topicgender 0.4291 1.0174 0.42 0.6733
topicmigration -1.3114 1.0166 -1.29 0.1974
age 0.0901 0.0362 2.49 0.0129 *
femaleTRUE -0.2010 0.8932 -0.23 0.8220
pol_stance 0.0398 0.2087 0.19 0.8489
persistence_dev1:anonymity_dev1 0.2004 0.4166 0.48 0.6306
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 12.8 on 951 degrees of freedom
Multiple R-squared: 0.0115, Adjusted R-squared: 0.00315
F-statistic: 1.38 on 8 and 951 DF, p-value: 0.202
Also here, no significant effects.
Gender
As preregistered, let’s see if effects differ across genders.
fit_fe_gen <-
hush(
brm(
op_expr ~
1 + persistence_dev * anonymity_dev * gender + age + pol_stance +
(1 | topic/group)
, data = d
, chains = 4
, cores = 4
, iter = 8000
, warmup = 2000
, family = zero_inflated_poisson()
, control = list(
adapt_delta = .95
, max_treedepth = 12
)
)
)
Warning: There were 3844 divergent transitions after warmup. See
https://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
to find out why this is a problem and how to eliminate them.
Warning: Examine the pairs() plot to diagnose sampling problems
Warning: The largest R-hat is 1.32, indicating chains have not mixed.
Running the chains for more iterations may help. See
https://mc-stan.org/misc/warnings.html#r-hat
Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
Running the chains for more iterations may help. See
https://mc-stan.org/misc/warnings.html#bulk-ess
Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
Running the chains for more iterations may help. See
https://mc-stan.org/misc/warnings.html#tail-ess
Again, some warnings.
Let’s inspect the model.
plot(fit_fe_gen, ask = FALSE)
Trace plots look alright.
Let’s look at the results.
summary(fit_fe_gen)
Warning: There were 319 divergent transitions after warmup. Increasing
adapt_delta above 0.95 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
Family: zero_inflated_poisson
Links: mu = log; zi = identity
Formula: op_expr ~ 1 + persistence_dev * anonymity_dev * gender + age + pol_stance + (1 | topic/group)
Data: d (Number of observations: 960)
Draws: 4 chains, each with iter = 6000; warmup = 2000; thin = 1;
total post-warmup draws = 16000
Multilevel Hyperparameters:
~topic (Number of levels: 3)
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept) 0.24 0.28 0.01 1.08 1.00 1881 1668
~topic:group (Number of levels: 48)
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept) 0.40 0.05 0.32 0.50 1.00 2824 4724
Regression Coefficients:
Estimate Est.Error l-95% CI u-95% CI
Intercept 2.00 0.18 1.61 2.40
persistence_dev1 0.02 0.06 -0.09 0.14
anonymity_dev1 -0.04 0.06 -0.15 0.08
gendermale -0.00 0.02 -0.05 0.04
age 0.01 0.00 0.01 0.01
pol_stance -0.02 0.01 -0.03 -0.01
persistence_dev1:anonymity_dev1 0.01 0.06 -0.11 0.12
persistence_dev1:gendermale -0.08 0.02 -0.12 -0.03
anonymity_dev1:gendermale 0.07 0.02 0.03 0.12
persistence_dev1:anonymity_dev1:gendermale 0.05 0.02 0.01 0.09
Rhat Bulk_ESS Tail_ESS
Intercept 1.00 2586 1884
persistence_dev1 1.00 2815 4790
anonymity_dev1 1.00 2742 4434
gendermale 1.00 10194 10160
age 1.00 21298 12326
pol_stance 1.00 11116 9270
persistence_dev1:anonymity_dev1 1.00 2857 4325
persistence_dev1:gendermale 1.00 10315 8940
anonymity_dev1:gendermale 1.00 9975 9554
persistence_dev1:anonymity_dev1:gendermale 1.00 10847 10513
Further Distributional Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
zi 0.21 0.01 0.19 0.24 1.00 11016 9500
Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
Indeed, several gender effects emerge:
- For females, the effect of persistence is larger, that is, more positive.
- For females, the effect of anonymity is smaller, that is, more negative.
- For females, the interaction effect is also a bit smaller, that is, more negative.
Let’s visualize results.
p_gen <- plot(
conditional_effects(
fit_fe_gen
),
ask = FALSE,
plot = FALSE
)
p_gen_pers <-
p_gen[["persistence_dev:gender"]] +
xlab("Persistence") +
ylab("Opinion expression") +
scale_y_continuous(
limits = c(4, 15),
breaks = c(5, 7.5, 10, 12.5, 15)
) +
scale_x_discrete(
limits = rev
) +
guides(
fill = "none"
, color = "none"
)
p_gen_anon <-
p_gen[["anonymity_dev:gender"]] +
xlab("Anonymity") +
ylab("Opinion expression") +
scale_y_continuous(
limits = c(3.5, 15),
breaks = c(5, 7.5, 10, 12.5, 15)
) +
theme(
axis.title.y = element_blank()
) +
guides(
fill = "none"
) +
scale_x_discrete(
limits = rev
) +
scale_color_discrete(
name = "Gender"
)
plot_gen <- cowplot::plot_grid(
p_gen_pers, p_gen_anon,
labels = c('A', 'B'),
nrow = 1,
rel_widths = c(4, 5)
)
plot_gen
ggsave("figures/results_gen.png", plot_gen, width = 8, height = 4)
Benefits
Let’s see if benefits differ across experimental groups.
We first look at the experimental groups’ descriptives.
d |>
group_by(persistence) |>
summarize(benefits_m = mean(benefits, na.rm = TRUE)) |>
as.data.frame() |>
kable()
| persistence | benefits_m |
|---|---|
| high | 3.12 |
| low | 3.23 |
Looking at persistence, we see that people in the low-persistence condition report slightly higher benefits.
d |>
group_by(anonymity) |>
summarize(benefits_m = mean(benefits, na.rm = TRUE)) |>
as.data.frame() |>
kable()
| anonymity | benefits_m |
|---|---|
| high | 3.15 |
| low | 3.20 |
Looking at anonymity, we see people with low anonymity reporting marginally higher benefits.
d |>
group_by(persistence, anonymity) |>
summarize(benefits_m = mean(benefits, na.rm = TRUE)) |>
as.data.frame() |>
kable()
| persistence | anonymity | benefits_m |
|---|---|---|
| high | high | 3.07 |
| high | low | 3.18 |
| low | high | 3.22 |
| low | low | 3.23 |
Looking at both factors combined, we see that low anonymity with low persistence yielded the highest benefits.
Let’s check whether the effects are significant.
fit_fe_ben_1 <-
hush(
brm(
benefits ~
1 + persistence_dev * anonymity_dev + age + female + pol_stance +
(1 | topic/group)
, data = d
, chains = 4
, cores = 4
, iter = 6000
, warmup = 2000
, control = list(
adapt_delta = .95
, max_treedepth = 12
)
)
)
Warning: Rows containing NAs were excluded from the model.
Warning: There were 118 divergent transitions after warmup. See
https://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
to find out why this is a problem and how to eliminate them.
Warning: Examine the pairs() plot to diagnose sampling problems
Let’s inspect the model.
plot(fit_fe_ben_1, ask = FALSE)
Trace plots look alright.
Let’s look at the results.
summary(fit_fe_ben_1)
Warning: There were 247 divergent transitions after warmup. Increasing
adapt_delta above 0.95 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
Family: gaussian
Links: mu = identity; sigma = identity
Formula: benefits ~ 1 + persistence_dev * anonymity_dev + age + female + pol_stance + (1 | topic/group)
Data: d (Number of observations: 705)
Draws: 4 chains, each with iter = 6000; warmup = 2000; thin = 1;
total post-warmup draws = 16000
Multilevel Hyperparameters:
~topic (Number of levels: 3)
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept) 0.13 0.19 0.00 0.73 1.00 1100 369
~topic:group (Number of levels: 48)
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept) 0.07 0.05 0.00 0.17 1.00 3788 5227
Regression Coefficients:
Estimate Est.Error l-95% CI u-95% CI Rhat
Intercept 3.22 0.18 2.89 3.60 1.01
persistence_dev1 -0.05 0.03 -0.11 0.01 1.00
anonymity_dev1 -0.03 0.03 -0.09 0.03 1.00
age -0.00 0.00 -0.01 0.00 1.00
femaleTRUE -0.07 0.06 -0.19 0.04 1.00
pol_stance 0.02 0.01 -0.01 0.04 1.00
persistence_dev1:anonymity_dev1 -0.02 0.03 -0.08 0.04 1.00
Bulk_ESS Tail_ESS
Intercept 856 306
persistence_dev1 14485 11192
anonymity_dev1 7778 8443
age 8196 6231
femaleTRUE 18094 11134
pol_stance 17172 9460
persistence_dev1:anonymity_dev1 7262 5633
Further Distributional Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma 0.74 0.02 0.70 0.78 1.00 13329 8099
Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
No significant effects. Note, however, that the effect of persistence on perceived benefits (95% CI [-0.11, 0.01]) only narrowly misses significance.
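To quantify how close this effect is to crossing zero, one can compute the posterior probability that it is negative. A minimal sketch using a normal approximation to the summary above (with the fitted object, the actual draws via `as_draws_df(fit_fe_ben_1)` or `hypothesis()` would be preferable):

```r
# normal approximation to the posterior of persistence_dev1
# (Estimate = -0.05, Est.Error = 0.03, taken from the summary above)
set.seed(1)
draws <- rnorm(16000, mean = -0.05, sd = 0.03)
mean(draws < 0) # posterior probability that the effect is negative (~ .95)
```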
Costs
Let’s see if perceived costs differed across experimental groups.
We first look at the experimental group’s descriptives
d |>
group_by(persistence) |>
summarize(costs = mean(costs, na.rm = TRUE)) |>
as.data.frame() |>
kable()
| persistence | costs |
|---|---|
| high | 1.99 |
| low | 1.99 |
Looking at persistence, we see both groups report equal costs.
d |>
group_by(anonymity) |>
summarize(costs = mean(costs, na.rm = TRUE)) |>
as.data.frame() |>
kable()
| anonymity | costs |
|---|---|
| high | 1.89 |
| low | 2.09 |
Looking at anonymity, we see people with low anonymity report slightly higher costs.
d |>
group_by(persistence, anonymity) |>
summarize(costs = mean(costs, na.rm = TRUE)) |>
as.data.frame() |>
kable()
`summarise()` has grouped output by 'persistence'. You can override using the
`.groups` argument.
| persistence | anonymity | costs |
|---|---|---|
| high | high | 1.90 |
| high | low | 2.07 |
| low | high | 1.87 |
| low | low | 2.11 |
Looking at the cells combined, we see that the highest costs were reported by the low-anonymity, low-persistence group.
Let’s see whether the effects are significant.
fit_fe_costs_1 <-
hush(
brm(
costs ~
1 + persistence_dev * anonymity_dev + age + female + pol_stance +
(1 | topic/group)
, data = d
, chains = 4
, cores = 4
, iter = 6000
, warmup = 2000
, control = list(
adapt_delta = .95
, max_treedepth = 12
)
)
)
Warning: Rows containing NAs were excluded from the model.
Warning: There were 232 divergent transitions after warmup. See
https://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
to find out why this is a problem and how to eliminate them.
Warning: Examine the pairs() plot to diagnose sampling problems
Let’s inspect model.
plot(fit_fe_costs_1, ask = FALSE)
Traceplots look alright.
Let’s look at results.
summary(fit_fe_costs_1)
Warning: There were 90 divergent transitions after warmup. Increasing
adapt_delta above 0.95 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
Family: gaussian
Links: mu = identity; sigma = identity
Formula: costs ~ 1 + persistence_dev * anonymity_dev + age + female + pol_stance + (1 | topic/group)
Data: d (Number of observations: 705)
Draws: 4 chains, each with iter = 6000; warmup = 2000; thin = 1;
total post-warmup draws = 16000
Multilevel Hyperparameters:
~topic (Number of levels: 3)
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept) 0.13 0.18 0.00 0.66 1.00 3243 2917
~topic:group (Number of levels: 48)
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept) 0.07 0.05 0.00 0.17 1.00 4567 6211
Regression Coefficients:
Estimate Est.Error l-95% CI u-95% CI Rhat
Intercept 2.48 0.19 2.11 2.85 1.00
persistence_dev1 0.00 0.03 -0.07 0.07 1.00
anonymity_dev1 -0.08 0.03 -0.15 -0.02 1.00
age -0.01 0.00 -0.02 -0.01 1.00
femaleTRUE 0.01 0.07 -0.12 0.14 1.00
pol_stance -0.00 0.02 -0.04 0.03 1.00
persistence_dev1:anonymity_dev1 0.02 0.03 -0.05 0.09 1.00
Bulk_ESS Tail_ESS
Intercept 7441 5033
persistence_dev1 16973 10458
anonymity_dev1 15824 12001
age 23160 12259
femaleTRUE 17259 11422
pol_stance 19931 10211
persistence_dev1:anonymity_dev1 13347 6847
Further Distributional Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma 0.84 0.02 0.80 0.89 1.00 17064 8181
Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
We find that higher anonymity does reduce perceived costs (b = -0.08, 95% CI [-0.15, -0.02]).
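Recall that with `contr.sum`, the first factor level ("high") is coded +1 and "low" is coded -1, so the estimated high-vs-low difference is twice the reported coefficient (here roughly 2 × -0.08 ≈ -0.17 on the costs scale). A self-contained illustration with simulated data (hypothetical values, not the study data):

```r
set.seed(42)
x <- factor(rep(c("high", "low"), each = 200))
contrasts(x) <- contr.sum(2) # high = +1, low = -1
y <- ifelse(x == "high", 1.9, 2.1) + rnorm(400, sd = 0.1)
# with sum coding, the coefficient is half the group difference
2 * coef(lm(y ~ x))["x1"] # recovers high - low (approx. -0.2)
```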
Mediation
Let’s see if perceived benefits and costs were associated with opinion expressions.
fit_fe_med <-
hush(
brm(
op_expr ~
1 + persistence_dev * anonymity_dev + benefits + costs + age + female + pol_stance +
(1 | topic/group)
, data = d
, chains = 4
, cores = 4
, iter = 6000
, warmup = 2000
, family = zero_inflated_poisson()
, control = list(
adapt_delta = .95
, max_treedepth = 12
)
)
)
Warning: Rows containing NAs were excluded from the model.
Warning: There were 1389 divergent transitions after warmup. See
https://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
to find out why this is a problem and how to eliminate them.
Warning: Examine the pairs() plot to diagnose sampling problems
Warning: The largest R-hat is 1.1, indicating chains have not mixed.
Running the chains for more iterations may help. See
https://mc-stan.org/misc/warnings.html#r-hat
Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
Running the chains for more iterations may help. See
https://mc-stan.org/misc/warnings.html#bulk-ess
Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
Running the chains for more iterations may help. See
https://mc-stan.org/misc/warnings.html#tail-ess
Let’s look at results.
summary(fit_fe_med)
Warning: There were 237 divergent transitions after warmup. Increasing
adapt_delta above 0.95 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
Family: zero_inflated_poisson
Links: mu = log; zi = identity
Formula: op_expr ~ 1 + persistence_dev * anonymity_dev + benefits + costs + age + female + pol_stance + (1 | topic/group)
Data: d (Number of observations: 705)
Draws: 4 chains, each with iter = 6000; warmup = 2000; thin = 1;
total post-warmup draws = 16000
Multilevel Hyperparameters:
~topic (Number of levels: 3)
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept) 0.21 0.25 0.01 0.94 1.00 2495 4490
~topic:group (Number of levels: 48)
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept) 0.41 0.05 0.33 0.51 1.00 2840 5243
Regression Coefficients:
Estimate Est.Error l-95% CI u-95% CI Rhat
Intercept 1.96 0.17 1.61 2.33 1.00
persistence_dev1 -0.02 0.06 -0.14 0.09 1.00
anonymity_dev1 0.00 0.06 -0.12 0.12 1.00
benefits 0.11 0.02 0.08 0.14 1.00
costs -0.09 0.01 -0.12 -0.07 1.00
age 0.01 0.00 0.01 0.01 1.00
femaleTRUE 0.00 0.02 -0.05 0.05 1.00
pol_stance -0.02 0.01 -0.03 -0.00 1.00
persistence_dev1:anonymity_dev1 0.02 0.06 -0.10 0.14 1.00
Bulk_ESS Tail_ESS
Intercept 5127 4650
persistence_dev1 3572 5591
anonymity_dev1 3374 4915
benefits 11632 9554
costs 12027 9628
age 20747 13026
femaleTRUE 12495 10627
pol_stance 12865 10854
persistence_dev1:anonymity_dev1 3396 5453
Further Distributional Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
zi 0.08 0.01 0.06 0.10 1.00 12787 9891
Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
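Because the Poisson part of the model uses a log link, coefficients act multiplicatively on the expected number of expressions; exponentiating turns them into rate ratios:

```r
exp(0.11)  # benefits: ~ 12% more expressions per scale point
exp(-0.09) # costs: ~ 9% fewer expressions per scale point
```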
We find that increased perceived costs are associated with fewer opinion expressions, while greater perceived benefits are associated with more. Let’s check whether the overall indirect effect is significant.
# a path: effect of anonymity on costs, from the costs model
anon_costs_a_b <- fixef(fit_fe_costs_1)["anonymity_dev1", "Estimate"]
anon_costs_a_se <- fixef(fit_fe_costs_1)["anonymity_dev1", "Est.Error"]
anon_costs_a_dis <- rnorm(10000, anon_costs_a_b, anon_costs_a_se)
# b path: effect of the mediator (costs) on opinion expression,
# from the mediation model
anon_costs_b_b <- fixef(fit_fe_med)["costs", "Estimate"]
anon_costs_b_se <- fixef(fit_fe_med)["costs", "Est.Error"]
anon_costs_b_dis <- rnorm(10000, anon_costs_b_b, anon_costs_b_se)
anon_costs_ab_dis <- anon_costs_a_dis * anon_costs_b_dis
anon_costs_ab_m <- median(anon_costs_ab_dis)
anon_costs_ab_ll <- quantile(anon_costs_ab_dis, .025)
anon_costs_ab_ul <- quantile(anon_costs_ab_dis, .975)
The indirect effect of anonymity on opinion expression via costs is significant (the 95% Monte-Carlo CI excludes zero).
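A note on this approach: the normal approximation can be replaced by multiplying the actual posterior draws of the two paths (with brms, via `as_draws_df()`), which preserves any skew in the posteriors. A self-contained sketch of the mechanics, with stand-in draw vectors (hypothetical values, not the fitted posteriors):

```r
set.seed(170819)
# stand-in posterior draws for the a path (anonymity -> costs)
# and the b path (costs -> opinion expression)
a_draws <- rnorm(16000, -0.08, 0.03)
b_draws <- rnorm(16000, -0.09, 0.01)
ab <- a_draws * b_draws # draw-wise product = indirect effect
quantile(ab, c(.025, .5, .975))
```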
Save
save.image("data/image.RData")